
    Efficient Average-Case Population Recovery in the Presence of Insertions and Deletions

    A number of recent works have considered the trace reconstruction problem, in which an unknown source string x in {0,1}^n is transmitted through a probabilistic channel which may randomly delete coordinates or insert random bits, resulting in a trace of x. The goal is to reconstruct the original string x from independent traces of x. While the asymptotically best algorithms known for worst-case strings use exp(O(n^{1/3})) traces [De et al., 2017; Fedor Nazarov and Yuval Peres, 2017], several highly efficient algorithms are known [Yuval Peres and Alex Zhai, 2017; Nina Holden et al., 2018] for the average-case version of the problem, in which the source string x is chosen uniformly at random from {0,1}^n. In this paper we consider a generalization of the above-described average-case trace reconstruction problem, which we call average-case population recovery in the presence of insertions and deletions. In this problem, rather than a single unknown source string there is an unknown distribution over s unknown source strings x^1,...,x^s in {0,1}^n, and each sample given to the algorithm is independently generated by drawing some x^i from this distribution and returning an independent trace of x^i. Building on the results of [Yuval Peres and Alex Zhai, 2017] and [Nina Holden et al., 2018], we give an efficient algorithm for the average-case population recovery problem in the presence of insertions and deletions. For any support size 1 <= s <= exp(Theta(n^{1/3})), for a 1-o(1) fraction of all s-element support sets {x^1,...,x^s} subset {0,1}^n, for every distribution D supported on {x^1,...,x^s}, our algorithm can efficiently recover D up to total variation distance at most epsilon with high probability, given access to independent traces of independent draws from D. The running time of our algorithm is poly(n, s, 1/epsilon) and its sample complexity is poly(s, 1/epsilon, exp(log^{1/3} n)). This polynomial dependence on the support size s is in sharp contrast with the worst-case version of the problem (when x^1,...,x^s may be any strings in {0,1}^n), in which the sample complexity of the most efficient known algorithm [Frank Ban et al., 2019] is doubly exponential in s.
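
    To make the channel model concrete, the following sketch simulates a trace of a source string and a single sample in the population-recovery setting. It illustrates the setup only; the function names and the deletion/insertion probabilities are ours, not parameters taken from the paper.

    import random

    def trace(x, p_del=0.1, p_ins=0.1):
        """Pass the bit string x through a toy insertion/deletion channel:
        each original bit is deleted with probability p_del, and before each
        position an independent uniform random bit is inserted with
        probability p_ins."""
        out = []
        for bit in x:
            if random.random() < p_ins:
                out.append(random.randint(0, 1))   # insertion of a uniform bit
            if random.random() >= p_del:
                out.append(bit)                    # bit survives the deletion
        return out

    def population_sample(population, weights, p_del=0.1, p_ins=0.1):
        """One sample in the population-recovery setting: draw a source string
        x^i from the unknown distribution D, then return an independent trace."""
        x = random.choices(population, weights=weights, k=1)[0]
        return trace(x, p_del, p_ins)

    # Example: s = 2 random source strings of length n = 20 and a skewed D.
    n, s = 20, 2
    population = [[random.randint(0, 1) for _ in range(n)] for _ in range(s)]
    print(population_sample(population, weights=[0.7, 0.3]))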

    Determination of Weak Amplitudes using Bose Symmetry and Dalitz Plots

    We present a new method using the Dalitz plot and the Bose symmetry of pions that allows the complete determination of the magnitudes and phases of weak decay amplitudes. We apply the method to processes such as B -> K^* pi, with the subsequent decay K^* -> K pi. Our approach enables the additional measurement of an isospin amplitude without any theoretical assumption. This advance will help in measuring the weak phase and probing for new physics beyond the standard model with fewer assumptions. Comment: 5 pages, 1 figure; Title changed; Conclusions unchanged; Accepted for publication in Physical Review Letters
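
    As a schematic illustration of the constraint being exploited (not a formula quoted from the paper): for a three-body final state containing two identical pions, Bose symmetry requires the Dalitz-plot amplitude to be symmetric under exchange of the two pions, i.e. under swapping the corresponding invariant masses,

        % Bose-symmetry constraint for two identical pions (illustrative only)
        \mathcal{A}\bigl(m^2(K\pi_1),\, m^2(K\pi_2)\bigr) = \mathcal{A}\bigl(m^2(K\pi_2),\, m^2(K\pi_1)\bigr)

    which relates different regions of the Dalitz plot where the two K pi combinations interfere.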

    Mathematical models of malaria - a review

    Mathematical models have been used to provide an explicit framework for understanding malaria transmission dynamics in human populations for over 100 years. With the disease still thriving and threatening to be a major source of death and disability due to changed environmental and socio-economic conditions, it is necessary to make a critical assessment of the existing models, and to study their evolution and efficacy in describing the host-parasite biology. In this article, starting from the basic Ross model, the key mathematical models and their underlying features are discussed, based on their specific contributions to the understanding of the spread and transmission of malaria. The first aim of this article is to develop, starting from the basic models, a hierarchical structure of a range of deterministic models of different levels of complexity. The second objective is to elaborate, using some of the representative mathematical models, the evolution of modelling strategies for describing malaria incidence by including the critical features of host-vector-parasite interactions. The emphasis is on the evolution of deterministic, differential-equation-based epidemiological compartment models, with a brief discussion of data-based statistical models. In this comprehensive survey, the approach has been to summarize the modelling activity in this area so that it reaches a wider range of researchers working on epidemiology, transmission, and other aspects of malaria. This may help mathematicians further develop suitable models relevant to the present scenario, and help biologists and public health personnel gain a better understanding of the modelling strategies used to control the disease.
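
    For reference, the basic Ross-type model from which such reviews start can be written as two coupled equations for the infected proportions of humans and mosquitoes. The sketch below is a minimal illustration with purely illustrative parameter values, not a model or parameterization taken from the article.

    def ross_model(Ih0=0.01, Iv0=0.01, a=0.3, b=0.5, c=0.5, m=20.0,
                   r=0.05, mu=0.1, dt=0.1, days=365):
        """Classical Ross-type two-compartment model (proportions infected):
        dIh/dt = a*b*m*Iv*(1 - Ih) - r*Ih   (humans infected by bites, recover at rate r)
        dIv/dt = a*c*Ih*(1 - Iv) - mu*Iv    (mosquitoes infected by biting, die at rate mu)
        Integrated here with a simple forward-Euler step."""
        Ih, Iv = Ih0, Iv0
        trajectory = [(0.0, Ih, Iv)]
        for k in range(1, int(days / dt) + 1):
            dIh = a * b * m * Iv * (1 - Ih) - r * Ih
            dIv = a * c * Ih * (1 - Iv) - mu * Iv
            Ih, Iv = Ih + dt * dIh, Iv + dt * dIv
            trajectory.append((k * dt, Ih, Iv))
        return trajectory

    # Example: infected fractions of humans and mosquitoes after one year.
    print(ross_model()[-1])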

    Accurate measurement of the D0-D0bar mixing parameters

    We propose a new method to determine the mass and width differences of the two D meson mass eigenstates, as well as the CP violating parameters associated with D^0-\bar{D}^0 mixing. We show that an accurate measurement of all the mixing parameters is possible for an arbitrary CP violating phase, by combining observables from a time dependent study of D decays to a doubly Cabibbo suppressed mode with information from a CP eigenstate. As an example we consider D^0 -> K^{*0} \pi^0 decays, where the K^{*0} is reconstructed in both K^+\pi^- and K_S\pi^0. We also show that decays to the CP eigenstate D -> K^+K^-, together with D -> K^+\pi^-, can be used to extract all the mixing parameters. A combined analysis using D^0 -> K^{*0} \pi^0 and D -> K^+K^- can also be used to reduce the ambiguity in the determination of the parameters. Comment: 4 pages, minor changes, a few references added
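
    For context, the kind of time-dependent observable such an analysis combines can be illustrated by the standard small-mixing expansion (to second order in the mixing parameters, assuming no CP violation) of the wrong-sign rate for D^0 -> K^+ pi^-; this is a textbook expression, not one quoted from the paper:

        % Wrong-sign time-dependent rate, small-mixing expansion (illustrative)
        \Gamma\bigl(D^0(t) \to K^+\pi^-\bigr) \;\propto\; e^{-\Gamma t}\left[ R_D + \sqrt{R_D}\, y'\,\Gamma t + \frac{x'^2 + y'^2}{4}\,(\Gamma t)^2 \right]

    where R_D is the doubly Cabibbo suppressed ratio and x', y' are the mixing parameters rotated by the strong phase between the two amplitudes.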

    Differential Fault Analysis of Rectangle-80

    We present various differential fault attack schemes for RECTANGLE-80 and demonstrate how we progressed from an initial 80-bit fault scheme to a single-word fault scheme. This was mainly due to a differential vulnerability in the S-box of RECTANGLE, as a result of which the exhaustive search space for the key reduces from 2^{80} to 2^{32}. We also present a key schedule attack that is a variant of the single fault scheme; it exploits the same vulnerability and reduces the search space to 2^{40}. The paper concludes with simulation results for the single word fault scheme, followed by countermeasures.
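
    The key-filtering step behind such an attack can be sketched generically: compare a correct and a faulty ciphertext word and keep only the key guesses consistent with the injected difference propagating through the S-box. The S-box below is a toy 4-bit permutation chosen for illustration, not the RECTANGLE S-box, and the cipher structure is deliberately simplified.

    # Toy 4-bit S-box used only for illustration -- NOT the RECTANGLE S-box.
    SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
            0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]
    INV_SBOX = [SBOX.index(v) for v in range(16)]

    def surviving_keys(c, c_faulty, fault_diff):
        """Generic last-round DFA filter for one 4-bit word: assume the
        ciphertext nibble is c = S(state) ^ k and that the fault flipped the
        S-box input by fault_diff.  A key guess k survives if inverting the
        S-box under that guess reproduces the injected input difference."""
        return [k for k in range(16)
                if INV_SBOX[c ^ k] ^ INV_SBOX[c_faulty ^ k] == fault_diff]

    # Example: with S(0x0)=0x6, S(0x1)=0x4 and key nibble 0x5, the correct and
    # faulty ciphertext nibbles are 0x3 and 0x1; only a few key guesses survive,
    # and repeating over words and faults shrinks the key space further.
    print(surviving_keys(c=0x3, c_faulty=0x1, fault_diff=0x1))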

    Estimating the Longest Increasing Subsequence in Nearly Optimal Time

    Longest Increasing Subsequence (LIS) is a fundamental statistic of a sequence, and has been studied for decades. While the LIS of a sequence of length n can be computed exactly in time O(n log n), the complexity of estimating the (length of the) LIS in sublinear time, especially when LIS << n, is still open. We show that for any integer n and any lambda = o(1), there exists a (randomized) non-adaptive algorithm that, given a sequence of length n with LIS >= lambda n, approximates the LIS up to a factor of 1/lambda^{o(1)} in n^{o(1)}/lambda time. Our algorithm improves upon prior work substantially in terms of both approximation and run-time: (i) we provide the first sub-polynomial approximation for LIS in sub-linear time; and (ii) our run-time complexity essentially matches the trivial sample complexity lower bound of Omega(1/lambda), which is required to obtain any non-trivial approximation of the LIS. As part of our solution, we develop two novel ideas which may be of independent interest. First, we define a new Genuine-LIS problem, where each sequence element may either be genuine or corrupted. In this model, the user receives unrestricted access to the actual sequence, but does not know a priori which elements are genuine. The goal is to estimate the LIS using genuine elements only, with the minimal number of "genuineness tests". The second idea, Precision Forest, enables accurate estimation of compositions of general functions from "coarse" (sub-)estimates. Precision Forest essentially generalizes classical precision sampling, which works only for summations. As a central tool, the Precision Forest is initially pre-processed on a set of samples, which is thereafter repeatedly reused by multiple sub-parts of the algorithm, improving their amortized complexity. Comment: Full version of FOCS 2022 paper
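
    The exact O(n log n) baseline the abstract contrasts with is the classical patience-sorting computation of the LIS length; the sketch below shows that baseline only, not the paper's sublinear estimator.

    from bisect import bisect_left

    def lis_length(seq):
        """Exact length of the longest strictly increasing subsequence in
        O(n log n) time: tails[i] holds the smallest possible tail value of
        an increasing subsequence of length i + 1."""
        tails = []
        for x in seq:
            i = bisect_left(tails, x)
            if i == len(tails):
                tails.append(x)       # x extends the longest subsequence so far
            else:
                tails[i] = x          # x is a smaller tail for length i + 1
        return len(tails)

    # Example: one LIS of this sequence is 2, 3, 7, 18, so the length is 4.
    print(lis_length([10, 9, 2, 5, 3, 7, 101, 18]))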